Research on indoor thermoacoustic environment evaluation based on facial expression recognition

Yao Runming, Sun Chuanyue, Han Shiyu, Du Chenqiu

2026.04.27

This study constructs an indoor environment comfort evaluation model based on human facial expression recognition, combining image recognition, facial feature extraction, and machine learning algorithms. Facial expressions of 24 subjects were collected in an artificial climate chamber, and three expression databases corresponding to the comfort levels comfortable, neutral, and uncomfortable were established. A residual neural network (ResNet) was used to automatically extract and learn the visual features of expression images under different overall comfort levels, reaching a validation-set accuracy of 87.7%; the trained model can thus evaluate occupants' comfort levels from their facial expressions. Subsequently, the histogram of oriented gradients (HOG) algorithm was used to extract facial features, and the accuracy, training time, and generalization ability of 12 machine learning algorithms were compared. The K-nearest neighbor (KNN) algorithm achieved the highest accuracy, 83.2%, with a shorter training time. The indoor environment comfort evaluation model based on facial expression recognition constructed in this paper can therefore provide a scientific reference for research on human comfort perception under the combined action of multiple factors in different thermoacoustic environments.
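The HOG-plus-KNN stage of the pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the images are synthetic stand-ins for face crops, the HOG variant is simplified (per-cell orientation histograms without block normalization), and the cell size, bin count, and k are illustrative assumptions; real expression data would be needed to approach the reported 83.2% accuracy.

```python
# Hedged sketch: simplified HOG feature extraction + KNN classification
# for three comfort classes. All data and parameters are illustrative.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier


def hog_features(img, cell=8, bins=9):
    """Simplified histogram-of-oriented-gradients descriptor for one grayscale image."""
    gy, gx = np.gradient(img.astype(float))          # image gradients
    mag = np.hypot(gx, gy)                           # gradient magnitude
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0     # unsigned orientation in [0, 180)
    h, w = img.shape
    hists = []
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            m = mag[i:i + cell, j:j + cell].ravel()
            a = ang[i:i + cell, j:j + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0.0, 180.0), weights=m)
            hists.append(hist / (np.linalg.norm(hist) + 1e-6))  # L2-normalize each cell
    return np.concatenate(hists)


rng = np.random.default_rng(0)
# Synthetic stand-ins: 60 grayscale 64x64 "face" images, labeled
# 0 = comfortable, 1 = neutral, 2 = uncomfortable.
images = rng.random((60, 64, 64))
labels = rng.integers(0, 3, size=60)

features = np.array([hog_features(img) for img in images])
X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.25, random_state=0
)

# K-nearest-neighbor classifier over the HOG descriptors.
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)
acc = knn.score(X_test, y_test)
print(f"validation accuracy: {acc:.3f}")
```

KNN is attractive here for the reason the abstract notes: it has effectively no training phase (it stores the descriptors and defers all work to query time), so its "training time" is minimal compared with models that must be fit iteratively.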